Automatic Speechreading with Applications to Human-Computer Interfaces
Authors
Abstract
There has been growing interest in introducing speech as a new modality into the human-computer interface (HCI). Motivated by the multimodal nature of speech, the visual component is considered to yield information that is not always present in the acoustic signal, enabling improved system performance over acoustic-only methods, especially in noisy environments. In this paper, we investigate ...

Similar Resources
Language Model Applications to Spelling with Brain-Computer Interfaces
Within the Ambient Assisted Living (AAL) community, Brain-Computer Interfaces (BCIs) have raised great hopes, as they provide alternative communication means for persons with disabilities, bypassing the need for speech and other motor activities. Although significant advancements have been realized in the last decade, applications of language models (e.g., word prediction, completion) have only r...
Feature analysis for automatic speechreading
Audio-Visual Automatic Speech Recognition systems use visual information to enhance ASR systems in clean and noisy environments. This paper compares a number of different visual feature extraction methods. When performing visual speech recognition, the visual feature vector requires a base level of detail for optimum recognition. Geometric feature extraction provides lower recognition than ...
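The abstract above contrasts geometric with appearance-based visual feature extraction. As a hedged illustration (not the paper's actual pipeline), one common appearance-based approach projects flattened mouth-region images onto their principal components ("eigenlips"); the array sizes and random data below are purely illustrative:

```python
import numpy as np

# Illustrative sketch: appearance-based visual features via PCA on
# flattened mouth-ROI images. Data and dimensions are hypothetical.
rng = np.random.default_rng(0)
frames = rng.random((100, 32 * 32))    # 100 mouth-region frames, 32x32 px, flattened

# Center the frames and find principal directions with an SVD.
mean = frames.mean(axis=0)
centered = frames - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)

k = 20                                  # keep the top-k components as features
features = centered @ vt[:k].T          # (100, 20) appearance feature vectors
print(features.shape)                   # (100, 20)
```

A geometric method would instead measure explicit quantities such as lip width and mouth opening, which is more compact but, as the abstract suggests, can discard detail useful for recognition.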
Specifying And Prototyping Dynamic Human-Computer Interfaces For Stochastic Applications
Formal methods are increasingly being used to support the software engineering of complex systems. A number of limitations restrict the utility of these techniques for the design of human-computer interfaces. Firstly, formal notations frequently abstract away from the temporal properties that affect usability. Secondly, specifications often fail to consider the stochastic, or probabilistic, behav...
Automatic speechreading of impaired speech
We investigate the use of visual, mouth-region information in improving automatic speech recognition (ASR) of the speech impaired. Given the video of an utterance by such a subject, we first extract appearance-based visual features from the mouth region-of-interest, and we use a feature fusion method to combine them with the subject’s audio features into bimodal observations. Subsequently, we a...
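The abstract above describes combining audio features with mouth-region visual features into bimodal observations. As a hedged sketch of one common fusion strategy (frame-synchronous concatenation, sometimes called early fusion; the feature dimensions here are assumptions, not the paper's), the combination might look like:

```python
import numpy as np

# Illustrative sketch: concatenative ("early") feature fusion of per-frame
# audio and visual features into bimodal observation vectors.
rng = np.random.default_rng(1)
audio = rng.random((100, 13))    # e.g. 13 cepstral coefficients per frame (assumed)
visual = rng.random((100, 20))   # e.g. 20 appearance-based mouth features (assumed)

# Concatenation assumes the two streams are frame-synchronous; in practice
# the visual stream is typically interpolated up to the audio frame rate.
bimodal = np.concatenate([audio, visual], axis=1)
print(bimodal.shape)             # (100, 33)
```

The fused vectors can then be fed to a standard ASR back end exactly as acoustic-only observations would be.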
Journal
Journal title: EURASIP Journal on Advances in Signal Processing
Year: 2002
ISSN: 1687-6172,1687-6180
DOI: 10.1155/s1110865702206137